Federated learning has recently been applied to recommendation systems to protect user privacy. In federated learning settings, recommendation models can be trained by collecting only intermediate parameters instead of real user data, which greatly enhances user privacy. Besides, federated recommendation systems make it possible to collaborate with other data platforms to improve recommendation model performance while meeting regulatory and privacy constraints. However, federated recommendation systems face many new challenges, such as privacy, security, heterogeneity, and communication costs. While significant research has been conducted in these areas, gaps remain in the survey literature. In this survey, we (1) summarize common privacy mechanisms used in federated recommendation systems and discuss the advantages and limitations of each mechanism; (2) review robust aggregation strategies and several novel attacks against security; (3) summarize approaches to address the heterogeneity and communication cost problems; (4) introduce open-source platforms that can be used to build federated recommendation systems; and (5) present prospective future research directions. This survey can help researchers and practitioners understand the research progress in these areas.
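For readers unfamiliar with the baseline that robust aggregation strategies build on, the following minimal sketch shows plain federated averaging (FedAvg), in which the server combines client model parameters weighted by local dataset size; it is a generic illustration of the aggregation step, not code from the survey.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Minimal FedAvg aggregation: weight each client's parameters by its
    local dataset size. Robust aggregation rules (e.g. coordinate-wise median
    or trimmed mean) would replace this weighted mean."""
    total = sum(client_sizes)
    return sum((n / total) * p for p, n in zip(client_params, client_sizes))

# Toy usage: three clients share only model parameters, never raw user data.
params = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])]
sizes = [100, 50, 200]
global_params = federated_average(params, sizes)
```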
Video super-resolution is one of the most popular tasks on mobile devices, widely used for automatic enhancement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
In this work, we propose SymphonyNet, a permutation-invariant language model, as a solution for symbolic symphony music generation. We propose a novel Multi-track Multi-instrument Repeatable (MMR) representation with a particular 3-D positional embedding and model the music sequence with a Transformer-based auto-regressive language model. To overcome length overflow when modeling extremely long symphony token sequences, we also propose a modified byte-pair-encoding algorithm (Music BPE) for music tokens and introduce a novel linear transformer decoder architecture as the backbone. Meanwhile, we train the decoder to learn automatic orchestration as a joint task by masking instrument information from the input. We also introduce a large-scale symbolic symphony dataset to advance research on symphony generation. Empirical results show that the proposed approach can generate coherent, novel, complex, and harmonious symphonies, serving as a pioneering solution for multi-track multi-instrument symbolic music generation.
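As a rough illustration of the byte-pair-encoding idea behind Music BPE, the sketch below repeatedly merges the most frequent adjacent token pair in symbolic music sequences to shorten very long inputs; it is vanilla BPE on toy note-like tokens, not the modified algorithm or vocabulary used by SymphonyNet.

```python
from collections import Counter

def music_bpe_merges(sequences, num_merges):
    """Minimal BPE sketch over symbolic music token sequences: repeatedly
    replace the most frequent adjacent token pair with a new merged token."""
    merges = []
    seqs = [list(s) for s in sequences]
    for _ in range(num_merges):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged = f"{a}+{b}"
        merges.append((a, b, merged))
        new_seqs = []
        for s in seqs:
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and s[i] == a and s[i + 1] == b:
                    out.append(merged)      # apply the learned merge
                    i += 2
                else:
                    out.append(s[i])
                    i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return merges, seqs

# Toy usage on note-like tokens.
merges, compressed = music_bpe_merges([["C4", "E4", "G4", "C4", "E4", "G4"]], num_merges=2)
```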
Human action recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications and has therefore attracted increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signals, which encode different sources of useful yet distinct information and offer various advantages depending on the application scenario. Consequently, many existing works have investigated different types of approaches to HAR using various modalities. In this paper, we present a comprehensive survey of recent progress in deep learning methods for HAR, organized by the type of input data modality. Specifically, we review the current mainstream deep learning methods for single and multiple data modalities, including fusion-based and co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.
With the fast development of big data, it has become easier than before to learn the optimal decision rule by updating it recursively and making online decisions. We study the online statistical inference of model parameters in a contextual bandit framework of sequential decision-making. We propose a general framework for online and adaptive data collection environments that updates decision rules via weighted stochastic gradient descent. We allow different weighting schemes for the stochastic gradient and establish the asymptotic normality of the parameter estimator. Our proposed estimator significantly improves asymptotic efficiency over the previous averaged SGD approach via inverse probability weights. We also conduct an optimality analysis of the weights in a linear regression setting. We provide a Bahadur representation of the proposed estimator and show that the remainder term in the Bahadur representation entails a slower convergence rate compared to classical SGD due to the adaptive data collection.
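The sketch below illustrates the kind of inverse-probability-weighted SGD update described above, assuming a linear reward model and an epsilon-greedy behavior policy; all function and variable names are illustrative, and this weighting scheme is only one of the choices the paper analyzes.

```python
import numpy as np

def ipw_sgd_step(theta, x, action_prob, reward, lr):
    """One inverse-probability-weighted SGD step for a linear reward model.

    theta:        current parameter estimate
    x:            context/feature vector of the chosen arm
    action_prob:  probability with which the behavior policy chose this arm
    reward:       observed reward
    lr:           step size
    """
    residual = reward - x @ theta        # prediction error under squared loss
    grad = -residual * x                 # gradient of 0.5 * residual**2 w.r.t. theta
    weight = 1.0 / action_prob           # inverse probability weight
    return theta - lr * weight * grad

# Toy usage: epsilon-greedy data collection with online averaging of iterates.
rng = np.random.default_rng(0)
d, T, eps = 5, 2000, 0.1
theta_true = rng.normal(size=d)
theta, theta_bar = np.zeros(d), np.zeros(d)
for t in range(1, T + 1):
    arms = rng.normal(size=(3, d))               # candidate contexts
    greedy = int(np.argmax(arms @ theta))
    a = rng.integers(3) if rng.random() < eps else greedy
    p = eps / 3 + (1 - eps) * (a == greedy)      # probability of the chosen arm
    r = arms[a] @ theta_true + rng.normal(scale=0.1)
    theta = ipw_sgd_step(theta, arms[a], p, r, lr=0.5 / np.sqrt(t))
    theta_bar += (theta - theta_bar) / t         # averaged estimator for inference
```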
Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other DDPM acceleration methods, which need to synthesize speech from scratch, ResGrad reduces the task complexity by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, compared with other DDPM speed-up methods: 1) ResGrad achieves better sample quality at the same inference speed measured by the real-time factor; 2) at similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
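The following sketch illustrates the plug-and-play residual refinement idea at inference time, assuming generic `tts_model`, `residual_diffusion`, and `vocoder` interfaces; these names are placeholders for illustration, not the paper's actual API.

```python
import torch

@torch.no_grad()
def residual_refinement_inference(tts_model, residual_diffusion, vocoder, text):
    """Sketch of residual refinement at inference time.

    1. A pretrained TTS model (e.g. FastSpeech 2) predicts a coarse mel-spectrogram.
    2. A lightweight diffusion model samples the residual, conditioned on that
       coarse spectrogram, instead of generating the spectrogram from scratch.
    3. The refined spectrogram (coarse prediction + sampled residual) is vocoded.
    """
    mel_coarse = tts_model(text)                        # [frames, mel_bins]
    residual = residual_diffusion.sample(cond=mel_coarse)
    mel_refined = mel_coarse + residual                 # add back the predicted residual
    return vocoder(mel_refined)                         # waveform
```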
Is it possible for a first-order method, i.e., one allowed only first derivatives, to be quadratically convergent? For univariate loss functions, the answer is yes -- the Steffensen method avoids second derivatives and is still quadratically convergent like Newton's method. By incorporating an optimal step size we can even push its convergence order beyond quadratic to $1+\sqrt{2} \approx 2.414$. While such high convergence orders are a pointless overkill for a deterministic algorithm, they become rewarding when the algorithm is randomized for problems of massive size, as randomization invariably compromises convergence speed. We introduce two adaptive learning rates inspired by the Steffensen method, intended for use in a stochastic optimization setting and requiring no hyperparameter tuning aside from batch size. Extensive experiments show that they compare favorably with several existing first-order methods. When restricted to a quadratic objective, our stochastic Steffensen methods reduce to the randomized Kaczmarz method -- note that this is not true for SGD or SLBFGS -- and thus we may also view our methods as a generalization of randomized Kaczmarz to arbitrary objectives.
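For reference, the sketch below implements the classical deterministic Steffensen iteration applied to the derivative of a univariate loss, the update that inspires the proposed adaptive learning rates; it is not the stochastic variants themselves, and the example loss is purely illustrative.

```python
def steffensen_minimize(dloss, x0, tol=1e-10, max_iter=50):
    """Classical Steffensen iteration for minimizing a univariate loss,
    i.e. root-finding on its first derivative, using only first derivatives.

    dloss: first derivative of the loss
    x0:    initial point (should be reasonably close to the minimizer)
    """
    x = x0
    for _ in range(max_iter):
        g = dloss(x)
        if abs(g) < tol:
            break
        # Divided-difference approximation of the second derivative,
        # built from two first-derivative evaluations.
        denom = (dloss(x + g) - g) / g
        x = x - g / denom
    return x

# Example: minimize L(x) = x**4 / 4 - x, so dloss(x) = x**3 - 1 with minimizer x = 1.
x_star = steffensen_minimize(lambda x: x**3 - 1, x0=1.5)
```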
Tracking multiple athletes in sports videos is a very challenging multi-object tracking (MOT) task, since athletes often have similar appearance and stay close to each other, which turns the common occlusion problem into a troublesome duplicate-detection problem. In this paper, duplicate detection is newly and precisely defined as the misreporting of occlusion, where one athlete is mistakenly covered by multiple detection boxes within a single frame. To address this problem, we carefully design a novel transformer-based duplicate detector (D$^3$) for training, together with a specific matching algorithm, Rally-Hungarian (RH). Once duplicate detection occurs, D$^3$ immediately corrects the training process by generating an augmented box loss. RH, which is triggered by the substitution rules of team sports, is especially well suited to sports videos. In addition, to complement tracking datasets without shot changes, we release a new dataset based on sports videos, named RallyTrack. Extensive experiments on RallyTrack show that combining D$^3$ and RH can substantially boost tracking performance, by 9.2 in MOTA and 4.5 in HOTA. Meanwhile, experiments on the MOT series and DanceTrack show that D$^3$ can accelerate convergence during training, saving up to 80% of the original training time on MOT17. Finally, our model, trained only on volleyball videos, can be applied directly to basketball and soccer videos for MAT, which shows the potential of our method. Our dataset is available at https://github.com/heruihr/rallytrack.
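As background, the sketch below shows the standard Hungarian assignment step used to match existing tracks to new detections in MOT pipelines (via SciPy's `linear_sum_assignment`); this is the generic building block that Rally-Hungarian modifies, and the rally- and substitution-specific logic from the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections_to_tracks(cost_matrix, max_cost=0.7):
    """Standard Hungarian matching between tracks (rows) and detections (columns).

    cost_matrix[i, j] is typically 1 - IoU(track_i, detection_j).
    Pairs whose cost exceeds max_cost are treated as unmatched.
    """
    rows, cols = linear_sum_assignment(cost_matrix)
    matches = []
    unmatched_tracks = set(range(cost_matrix.shape[0]))
    unmatched_dets = set(range(cost_matrix.shape[1]))
    for r, c in zip(rows, cols):
        if cost_matrix[r, c] <= max_cost:
            matches.append((r, c))
            unmatched_tracks.discard(r)
            unmatched_dets.discard(c)
    return matches, sorted(unmatched_tracks), sorted(unmatched_dets)

# Toy usage: two tracks, three detections.
costs = np.array([[0.1, 0.9, 0.8],
                  [0.7, 0.2, 0.9]])
matches, lost_tracks, new_dets = match_detections_to_tracks(costs)
```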
Self-training has shown great potential in semi-supervised learning. Its core idea is to use the model learned on labeled data to generate pseudo-labels for unlabeled samples, and then teach itself. To obtain valid supervision, mainstream approaches typically adopt a momentum teacher for pseudo-label prediction, but they suffer from the confirmation-bias problem, where incorrect predictions may provide wrong supervision signals and accumulate during training. The main cause of this drawback is that the prevailing self-training framework uses previous knowledge to guide the current state, since the teacher is updated only with past students. To alleviate this problem, we propose a novel self-training strategy that allows the model to learn from the future. Concretely, at each training step we first virtually optimize the student (i.e., cache the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as guidance. In this way, we manage to improve the quality of the pseudo-labels and thus boost performance. We also develop two variants of our future-self-training (FST) framework, by peering into the future deeply (FST-D) and widely (FST-W). Taking unsupervised domain adaptive semantic segmentation and semi-supervised semantic segmentation as instances, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. Code will be made publicly available.
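A minimal single-step sketch of the "learning from the future" idea in a generic classification setting follows; the concrete losses, schedules, and the FST-D/FST-W variants in the paper differ in detail, and all names here are illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def future_self_training_step(student, teacher, optimizer,
                              x_labeled, y_labeled, x_unlabeled, ema=0.99):
    """One training step in the spirit of learning from the future.

    1. Virtually update the student on labeled data (on a throwaway copy,
       so the real weights are untouched).
    2. EMA-update the teacher towards this virtual future student.
    3. Let the updated teacher produce pseudo-labels for unlabeled data.
    4. Train the real student on labeled loss + pseudo-label loss.
    """
    # --- 1. virtual student update (does not touch the real student) ---
    virtual = copy.deepcopy(student)
    v_opt = torch.optim.SGD(virtual.parameters(), lr=optimizer.param_groups[0]["lr"])
    v_opt.zero_grad()
    F.cross_entropy(virtual(x_labeled), y_labeled).backward()
    v_opt.step()

    # --- 2. update teacher with the virtual future student ---
    with torch.no_grad():
        for t_p, v_p in zip(teacher.parameters(), virtual.parameters()):
            t_p.mul_(ema).add_(v_p, alpha=1 - ema)

    # --- 3. teacher generates pseudo-labels for unlabeled samples ---
    with torch.no_grad():
        pseudo = teacher(x_unlabeled).argmax(dim=1)

    # --- 4. real student update on labeled + pseudo-labeled data ---
    optimizer.zero_grad()
    loss = F.cross_entropy(student(x_labeled), y_labeled) \
         + F.cross_entropy(student(x_unlabeled), pseudo)
    loss.backward()
    optimizer.step()
    return loss.item()
```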
This work extends previous advances in genetic fingerprint spoofing and introduces Diversity and Novelty MasterPrints. The system uses a quality-diversity evolutionary algorithm to generate dictionaries of artificial prints, with a focus on increasing the coverage of users in a dataset. Diversity MasterPrints focus on generating solution prints that match users not covered by previously found prints, while Novelty MasterPrints explicitly search for prints that lie farther from previous prints in user space. Our multi-print search methodology outperforms singular DeepMasterPrints in both coverage and generalization, while maintaining the quality of the fingerprint image output.